
    Joint Prediction of Depths, Normals and Surface Curvature from RGB Images using CNNs

    Understanding the 3D structure of a scene is of vital importance when developing fully autonomous robots. To this end, we present a novel deep-learning-based framework that estimates depth, surface normals and surface curvature from only a single RGB image. To the best of our knowledge, this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well-designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments in which the network is trained to infer different tasks while the model capacity is kept constant, resulting in different feature maps depending on the task at hand. We outperform previous state-of-the-art benchmarks that jointly estimate depth and surface normals, while predicting surface curvature in parallel.
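
    As a rough illustration of the shared-encoder, multi-head setup this abstract describes (the class name, layer sizes and heads below are assumptions for the sketch, not the authors' architecture), a joint depth/normal/curvature network in PyTorch might look like this:

```python
# Hypothetical sketch of a shared-encoder, multi-head CNN for joint
# depth / surface-normal / curvature prediction from a single RGB image.
# Layer sizes and names are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class JointGeometryNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor over the RGB input; model capacity
        # stays constant while the supervised targets change per task.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # One lightweight prediction head per task.
        self.depth_head = nn.Conv2d(128, 1, 1)      # per-pixel depth
        self.normal_head = nn.Conv2d(128, 3, 1)     # per-pixel normal
        self.curvature_head = nn.Conv2d(128, 1, 1)  # per-pixel curvature

    def forward(self, rgb):
        feats = self.encoder(rgb)
        depth = self.depth_head(feats)
        # Normalise so each pixel's predicted normal has unit length.
        normals = nn.functional.normalize(self.normal_head(feats), dim=1)
        curvature = self.curvature_head(feats)
        return depth, normals, curvature

# Example: a single 64x64 RGB image.
net = JointGeometryNet()
d, n, c = net(torch.randn(1, 3, 64, 64))
```

    In such a setup, tuning which heads are supervised (e.g. adding the curvature head) changes the shared features the encoder learns, which is the network-guidance effect the abstract reports.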

    Open for Learning: Using Open Data Tools and Techniques to Support Student Learning

    In this note we present a notion of harmonic oscillator on the Heisenberg group H^n which forms the natural analogue of the harmonic oscillator on R^n under a few reasonable assumptions: the harmonic oscillator on H^n should be a negative sum of squares of operators related to the sub-Laplacian on H^n, essentially self-adjoint with purely discrete spectrum, and its eigenvectors should be smooth functions and form an orthonormal basis of L^2(H^n). This approach leads to a differential operator on H^n which is determined by the (stratified) Dynin-Folland Lie algebra. We provide an explicit expression for the operator as well as an asymptotic estimate for its eigenvalues.
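
    For orientation, the classical Euclidean object being generalised here (not the operator constructed in the note itself) can be written in exactly the "negative sum of squares" form the abstract mentions: on L^2(R^n) the skew-adjoint operators d/dx_j and multiplication by i x_j generate a representation of the Heisenberg Lie algebra, and

```latex
% Classical harmonic oscillator on L^2(\mathbb{R}^n), written as a
% negative sum of squares of skew-adjoint operators, with purely
% discrete spectrum:
H = -\Delta + |x|^2
  = -\sum_{j=1}^{n} \Bigl( \partial_{x_j}^{2} + (i\,x_j)^{2} \Bigr),
\qquad \operatorname{spec}(H) = \{\, 2k + n : k \in \mathbb{N}_0 \,\}.
```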

    Look no deeper: Recognizing places from opposing viewpoints under varying scene appearance using single-view depth estimation

    Visual place recognition (VPR) - the act of recognizing a familiar visual place - becomes difficult when there is extreme environmental appearance change or viewpoint change. Particularly challenging is the scenario where both phenomena occur simultaneously, such as when returning for the first time along a road at night that was previously traversed during the day in the opposite direction. While such problems can be solved with panoramic sensors, humans solve them regularly with a limited field of view and without needing to constantly turn around. In this paper, we present a new depth- and temporal-aware visual place recognition system that solves the opposing-viewpoint, extreme appearance-change visual place recognition problem. Our system performs sequence-to-single matching by extracting depth-filtered keypoints using a state-of-the-art depth estimation pipeline, constructing a keypoint sequence over multiple frames from the reference dataset, and comparing those keypoints to those in a single query image. We evaluate the system on a challenging benchmark dataset and show that it consistently outperforms state-of-the-art techniques. We also develop a range of diagnostic simulation experiments that characterize the contribution of depth-filtered keypoint sequences with respect to key domain parameters, including degree of appearance change and camera motion.
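
    A minimal sketch of the two ideas named in this abstract, depth-filtering keypoints and sequence-to-single matching, might look like the following; the function and parameter names are assumptions for illustration, not the paper's pipeline:

```python
# Hypothetical sketch: discard keypoints estimated to be far away, so
# close-range structure (more stable under viewpoint reversal) dominates,
# then score a single query image against a multi-frame reference sequence.
import numpy as np

def depth_filtered_keypoints(keypoints, descriptors, depth_map, max_depth=10.0):
    """Keep only keypoints whose estimated depth is within max_depth."""
    kept = [i for i, (x, y) in enumerate(keypoints)
            if depth_map[int(y), int(x)] <= max_depth]
    return keypoints[kept], descriptors[kept]

def sequence_to_single_score(reference_sequence, query_descriptors):
    """For each reference frame, take the best cosine similarity per
    query descriptor (descriptors assumed unit-norm), then average
    over the whole sequence."""
    scores = []
    for ref_desc in reference_sequence:
        sims = ref_desc @ query_descriptors.T          # (n_ref, n_query)
        scores.append(float(sims.max(axis=0).mean()))  # best match per query
    return float(np.mean(scores))

# Toy example with random unit descriptors and a synthetic depth map.
rng = np.random.default_rng(0)
kp = rng.uniform(0, 64, size=(50, 2))
desc = rng.normal(size=(50, 16))
desc /= np.linalg.norm(desc, axis=1, keepdims=True)
depth = rng.uniform(0, 30, size=(64, 64))
kp_f, desc_f = depth_filtered_keypoints(kp, desc, depth)
print(sequence_to_single_score([desc_f, desc_f], desc_f))
```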